Analysis of hypernetwork characteristics in Tang poems and Song lyrics
WANG Gaojie, YE Zhonglin, ZHAO Haixing, ZHU Yu, MENG Lei
Journal of Computer Applications    2021, 41 (8): 2432-2439.   DOI: 10.11772/j.issn.1001-9081.2020101569
At present, Tang poems and Song lyrics have been studied extensively from the literary perspective, but rarely with the hypergraph-based hypernetwork method, and the few studies of this kind are limited to Chinese character frequency and word frequency. Analyzing Tang poems and Song lyrics with hypernetwork data analysis helps to reach a breadth that the traditional literary perspective cannot, and to discover the laws of word composition in the literature and the historical backgrounds reflected by Tang poems and Song lyrics. Therefore, based on two ancient text corpora, Quan Tang Shi and Quan Song Ci, hypernetworks of Tang poems and Song lyrics were established respectively. In the construction of the hypernetworks, each Tang poem or Song lyric was taken as a hyperedge, and the characters in it were taken as the nodes within that hyperedge. Then, topological indexes and network characteristics of the two hypernetworks, such as node hyperdegree, node hyperdegree distribution, hyperedge node degree, and hyperedge node degree distribution, were analyzed experimentally, in order to reveal the character usage, word usage and aesthetic tendencies of poets in the Tang dynasty and lyricists in the Song dynasty. Finally, work hypernetworks were constructed from the poems and lyrics of Li Bai, Du Fu, Su Shi and Xin Qiji, and the relevant network parameters were calculated. The analysis results show that there is a great difference between the maximum and minimum hyperdegrees of the two hypernetworks, and the hyperdegree distributions approximate power-law distributions, which indicates that the two hypernetworks are scale-free. In addition, the hyperedge node degrees have obvious distribution characteristics: they mostly fall between 20 and 100 in the Tang poem hypernetwork and between 30 and 130 in the Song lyric hypernetwork. Moreover, the work hypernetworks have a smaller average path length and a larger clustering coefficient, which reflects their small-world characteristics.
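The hyperdegree statistics described above are straightforward to compute once each poem is modeled as a hyperedge over its distinct characters. The sketch below is illustrative only (the three-line toy corpus and function names are assumptions, not the paper's pipeline):

```python
from collections import Counter

def build_hypernetwork(poems):
    """Each poem (a string) is one hyperedge; its distinct characters
    are the nodes that the hyperedge contains."""
    hyperedges = [set(p) for p in poems]
    # Node hyperdegree: number of hyperedges a character appears in.
    hyperdegree = Counter(ch for edge in hyperedges for ch in edge)
    # Hyperedge node degree: number of distinct nodes in each hyperedge.
    edge_degrees = [len(edge) for edge in hyperedges]
    return hyperdegree, edge_degrees

# Three-line toy corpus, not real Quan Tang Shi data.
poems = ["床前明月光", "明月几时有", "光阴似箭"]
hyperdegree, edge_degrees = build_hypernetwork(poems)
```

On a full corpus, plotting the hyperdegree counts on log-log axes would be the natural way to check the approximate power-law behavior reported above.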
Protocol identification approach based on semi-supervised subspace clustering
ZHU Yuna, ZHANG Yutao, YAN Shaoge, FAN Yudan, CHEN Hantuo
Journal of Computer Applications    2021, 41 (10): 2900-2904.   DOI: 10.11772/j.issn.1001-9081.2020122002
Existing statistical-feature-based identification methods do not consider the differences between protocols when selecting identification features. To solve this problem, a Semi-supervised Subspace-clustering Protocol Identification Approach (SSPIA) was proposed by combining semi-supervised learning with the Fuzzy Subspace Clustering (FSC) method. Firstly, prior constraints were obtained by transforming labeled sample flows into pairwise constraint information. Secondly, the Semi-supervised Fuzzy Subspace Clustering (SFSC) algorithm was proposed on this basis, using the constraints to guide the subspace clustering process. Then, a mapping between clusters and protocol types was established to obtain the weight coefficient of each protocol feature, and an individualized cryptographic protocol feature library was constructed for subsequent protocol identification. Finally, clustering and identification experiments were carried out on five typical cryptographic protocols. Experimental results show that, compared with the traditional K-means method and the FSC method, the proposed SSPIA achieves a better clustering effect, and the protocol identification classifier built on SSPIA is more accurate, with a higher protocol identification rate and a lower error identification rate. SSPIA thus improves identification based on statistical features.
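As a rough illustration of the first step, labeled flows can be turned into must-link / cannot-link pairs. This is a sketch of the general technique, not the paper's exact SFSC formulation, and the protocol labels are hypothetical:

```python
from itertools import combinations

def pairwise_constraints(labels):
    """Turn labeled sample flows into must-link / cannot-link pairs,
    the prior constraints that guide semi-supervised clustering."""
    must_link, cannot_link = [], []
    for (i, a), (j, b) in combinations(enumerate(labels), 2):
        # Same protocol label -> must-link; different -> cannot-link.
        (must_link if a == b else cannot_link).append((i, j))
    return must_link, cannot_link

# Hypothetical protocol labels for five sample flows.
ml, cl = pairwise_constraints(["ssh", "ssh", "tls", "tls", "ssh"])
```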
Fake review detection model based on vertical ensemble Tri-training
YIN Chunyong, ZHU Yuhang
Journal of Computer Applications    2020, 40 (8): 2194-2201.   DOI: 10.11772/j.issn.1001-9081.2019112046
Fake reviews mislead users and damage their interests, while large-scale manual labeling of reviews is too costly. To address these problems, a fake review detection model based on Vertical Ensemble Tri-Training (VETT) was proposed, which improves detection accuracy by reusing the classification models generated in previous iterations. In the model, user behavior characteristics were combined with review text characteristics for feature extraction. In the VETT algorithm, the iterative process is divided into two parts: vertical ensemble within a group and horizontal ensemble between groups. The in-group ensemble constructs an original classifier from the previous iterations of that classifier, and the inter-group ensemble trains the three original classifiers through the traditional process to obtain the second-generation classifiers of the iteration, thereby improving label accuracy. Compared with Co-training, Tri-training, PU learning based on Area Under Curve (PU-AUC) and Vertical Ensemble Co-training (VECT), VETT increases the maximum F1 value by 6.5, 5.08, 4.27 and 4.23 percentage points respectively. Experimental results show that the proposed VETT algorithm has better classification performance.
Joint super-resolution and deblurring method based on generative adversarial network for text images
CHEN Saijian, ZHU Yuanping
Journal of Computer Applications    2020, 40 (3): 859-864.   DOI: 10.11772/j.issn.1001-9081.2019071205
Aiming at the difficulty that existing super-resolution methods have in reconstructing clear high-resolution images from blurred low-resolution ones, a joint super-resolution and deblurring method for text images based on Generative Adversarial Network (GAN) was proposed. Firstly, focusing on low-resolution text images with severe blur, the generator network was composed of an up-sampling module and a deblurring module. Secondly, the input images were up-sampled by the up-sampling module to generate blurred super-resolution images. Thirdly, the deblurring module was used to reconstruct clear super-resolution images. Finally, in order to recover the text images better, a joint training loss was introduced, including a super-resolution pixel loss, a deblurring pixel loss, a semantic-layer feature matching loss and an adversarial loss. Extensive experiments on synthetic and real-world images demonstrate that, compared with the existing advanced method SCGAN (Single-Class GAN), the proposed method improves Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM) and OCR (Optical Character Recognition) accuracy by 1.52 dB, 0.011 5 and 13.2 percentage points respectively, and it handles degraded text images in real scenes better at low computational cost.
Time series motif discovery algorithm based on subsequence full join and maximum clique
ZHU Yuelong, ZHU Xiaoxiao, WANG Jimin
Journal of Computer Applications    2019, 39 (2): 414-420.   DOI: 10.11772/j.issn.1001-9081.2018061326
Existing time series motif discovery algorithms have high computational complexity and cannot find multi-instance motifs. To overcome these defects, a Time Series motif discovery algorithm based on Subsequence full Joins and Maximum Clique (TSSJMC) was proposed. Firstly, a fast time series subsequence full join algorithm was used to obtain the distances between all subsequences and generate the distance matrix. Then, a similarity threshold was set, the distance matrix was transformed into an adjacency matrix, and the subsequence similarity graph was constructed. Finally, the maximum clique in the similarity graph was extracted by a maximum clique search algorithm, and the time series corresponding to the vertices of the maximum clique were the motifs containing the most instances. In experiments on public time series datasets, TSSJMC was compared in terms of accuracy, efficiency, scalability and robustness with the Brute Force algorithm and the Random Projection algorithm, which can also find multi-instance motifs. The results demonstrate that TSSJMC has obvious advantages over Random Projection in efficiency, scalability and robustness; compared with Brute Force, TSSJMC finds slightly fewer motif instances but has better efficiency and scalability. Therefore, TSSJMC balances quality and efficiency.
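The threshold-and-clique pipeline can be sketched as follows. The Bron-Kerbosch search used here is a generic maximum clique routine and the tiny distance matrix is made up, so this shows only the shape of the approach, not the paper's fast subsequence full join:

```python
def largest_motif(dist, threshold):
    """Threshold the distance matrix into a similarity graph and return
    the maximum clique, i.e. the motif with the most instances."""
    n = len(dist)
    adj = {i: {j for j in range(n) if j != i and dist[i][j] <= threshold}
           for i in range(n)}

    best = set()

    def bron_kerbosch(r, p, x):
        # Classic Bron-Kerbosch enumeration of maximal cliques.
        nonlocal best
        if not p and not x:
            if len(r) > len(best):
                best = set(r)
            return
        for v in list(p):
            bron_kerbosch(r | {v}, p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)

    bron_kerbosch(set(), set(range(n)), set())
    return best

# Toy matrix: subsequences 0, 1, 2 are mutually similar, 3 is not.
d = [[0, 1, 1, 9],
     [1, 0, 1, 9],
     [1, 1, 0, 9],
     [9, 9, 9, 0]]
motif = largest_motif(d, threshold=2)
```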
Multi-cell uplink joint power control algorithm for LTE system
ZHANG Roujia, ZHAN Qingxiang, ZHU Yuhang, TAN Guoping
Journal of Computer Applications    2019, 39 (1): 33-38.   DOI: 10.11772/j.issn.1001-9081.2018071624
Focusing on the issue that traditional open-loop power control algorithms aim to increase throughput while ignoring the interference caused to other cells, an Uplink Joint Power Control algorithm for the LTE system (UJPC) was proposed to achieve a tradeoff between edge-user and whole-system performance. The algorithm adopted a single base station with three sectors as the system model and aimed to maximize the proportional fairness index of system throughput. Firstly, the corresponding mathematical optimization model was obtained according to two constraints: the minimum Signal-to-Interference plus Noise Ratio (SINR) and the maximum transmit power of users. Then, the successive convex approximation method was used to solve the optimization problem and obtain the optimal transmit power of all users in each cell. The simulation results show that, compared with the open-loop scheme, UJPC can greatly improve the spectrum utilization of the cell edge while ensuring the average spectrum utilization of the system, with a best performance gain of up to 50%.
Resource allocation optimization method for augmented reality applications based on mobile edge computing
YU Yun, LIAN Xiaocan, ZHU Yuhang, TAN Guoping
Journal of Computer Applications    2019, 39 (1): 22-25.   DOI: 10.11772/j.issn.1001-9081.2018071615

Considering the time delay and the terminal energy consumption caused by high-speed data transmission and computation, an uplink transmission scheme with equal power allocation was proposed. Firstly, based on the collaborative properties of Augmented Reality (AR) services, a system model reflecting AR characteristics was established. Secondly, the system frame structure was analyzed in detail, and the constraints for minimizing the total energy consumption of the system were established. Finally, with the time delay and energy consumption constraints satisfied, a mathematical model of Mobile Edge Computing (MEC) resource optimization based on convex optimization was established to obtain the optimal communication and computing resource allocation. Under maximum time delays of 0.1 s and 0.15 s, the total energy consumption of the proposed scheme was in both cases 14.6% of that of the user-independent transmission scheme. The simulation results show that, under the same conditions, the equal-power MEC optimization scheme that considers cooperative transmission between users can significantly reduce the total energy consumption of the system compared with the optimization scheme based on user-independent transmission.

k-core filtered influence maximization algorithms in social networks
LI Yuezhi, ZHU Yuanyuan, ZHONG Ming
Journal of Computer Applications    2018, 38 (2): 464-470.   DOI: 10.11772/j.issn.1001-9081.2017071820
Concerning the limited influence scope and high time complexity of existing influence maximization algorithms in social networks, a k-core filtered algorithm based on the independent cascade model was proposed. Firstly, an existing influence maximization algorithm whose node ranking does not depend on the entire network was introduced. Secondly, pre-training was carried out to find the value of k that yields the best optimization effect on the existing algorithm, a value independent of the number of selected seeds. Finally, the nodes and edges that do not belong to the k-core subgraph were filtered out by computing the k-core of the graph, and the existing influence maximization algorithms were applied on the k-core subgraph, thus reducing computational complexity. Experiments on datasets of different scales show that the k-core filtered algorithm has different optimization effects on different influence maximization algorithms. Combined with the k-core filtered algorithm, compared with the original Prefix excluding Maximum Influence Arborescence (PMIA) algorithm, the influence range is increased by 13.89% and the execution time is reduced by as much as 8.34%; compared with the original Core Covering Algorithm (CCA), the influence range shows no obvious difference and the execution time is reduced by as much as 28.5%; compared with the original OutDegree algorithm, the influence range is increased by 21.81% and the execution time is reduced by as much as 26.96%; compared with the original Random algorithm, the influence range is increased by 71.99% and the execution time is reduced by as much as 24.21%. Furthermore, a new influence maximization algorithm named GIMS (General Influence Maximization in Social network) was proposed. Compared with PMIA and Influence Rank Influence Estimation (IRIE), it has a wider influence range while keeping execution time at the second level, and when combined with the k-core filtered algorithm, its influence range and execution time do not change significantly. The experimental results show that the k-core filtered algorithm can effectively increase the influence range of existing algorithms and reduce their execution time, and that the proposed GIMS algorithm has a wider influence range, better efficiency and stronger robustness.
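The filtering step itself is simple degree peeling. The sketch below is a generic k-core computation with a made-up toy graph (not the paper's datasets); an influence maximization algorithm would then run on the surviving subgraph:

```python
from collections import defaultdict

def k_core_filter(edges, k):
    """Repeatedly peel nodes of degree < k; the nodes that survive
    form the k-core subgraph used for influence maximization."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for node in [n for n in adj if len(adj[n]) < k]:
            for nbr in adj.pop(node):
                if nbr in adj:
                    adj[nbr].discard(node)
            changed = True
    return set(adj)

# Triangle a-b-c with a pendant node d: the 2-core drops d.
edges = [("a", "b"), ("b", "c"), ("a", "c"), ("a", "d")]
core = k_core_filter(edges, 2)
```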
User identification across multiple social networks based on information entropy
WU Zheng, YU Hongtao, LIU Shuxin, ZHU Yuhang
Journal of Computer Applications    2017, 37 (8): 2374-2380.   DOI: 10.11772/j.issn.1001-9081.2017.08.2374
The precision of user identification is low because subjective weighting algorithms ignore the special meanings and effects of attributes in applications. To solve this problem, an Information Entropy based Multiple Social Networks User Identification Algorithm (IE-MSNUIA) was proposed. Firstly, the data types and physical meanings of different attributes were analyzed, and corresponding similarity calculation methods were adopted. Secondly, attribute weights were determined according to their information entropies, so that the latent information of each attribute could be fully exploited. Finally, all chosen attributes were integrated to determine whether an account pair matched. Theoretical analysis and experimental results show that, compared with machine-learning-based algorithms and subjective weighting algorithms, the proposed algorithm performs better: on different datasets its average precision reaches up to 97.2%, its average recall up to 94.1%, and its average comprehensive evaluation metric up to 95.6%. The proposed algorithm can accurately identify user accounts across multiple social networks.
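One standard way to weight attributes by information entropy is the entropy weight method, sketched below. The exact weighting used in IE-MSNUIA may differ, and the attribute columns here are invented:

```python
import math
from collections import Counter

def normalized_entropy(values):
    """Shannon entropy of a discrete attribute column, scaled to [0, 1]."""
    n = len(values)
    probs = [c / n for c in Counter(values).values()]
    h = -sum(p * math.log(p, 2) for p in probs)
    max_h = math.log(n, 2)
    return h / max_h if max_h > 0 else 0.0

def entropy_weights(columns):
    """Entropy weight method: a lower-entropy (more concentrated,
    more discriminative) attribute receives a larger weight."""
    divergence = [1.0 - normalized_entropy(col) for col in columns]
    total = sum(divergence)
    return [d / total for d in divergence]

# Hypothetical similarity classes of two attributes over four account pairs.
w = entropy_weights([["hi", "hi", "hi", "lo"],
                     ["hi", "lo", "hi", "lo"]])
```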
Android malware application detection using deep learning
SU Zhida, ZHU Yuefei, LIU Long
Journal of Computer Applications    2017, 37 (6): 1650-1656.   DOI: 10.11772/j.issn.1001-9081.2017.06.1650
Traditional Android malware detection algorithms have low detection accuracy and cannot identify Android malware that uses repacking and code obfuscation. To solve these problems, the DeepDroid algorithm was proposed. Firstly, the static and dynamic features of Android applications were extracted and combined into application feature vectors. Secondly, a Deep Belief Network (DBN) was trained on the collected training set to generate a deep learning network. Finally, untrusted Android applications were detected by the generated network. Experimental results on the same test set show that the correct rate of DeepDroid is 3.96 percentage points higher than that of Support Vector Machine (SVM), 12.16 percentage points higher than that of Naive Bayes, and 13.62 percentage points higher than that of K-Nearest Neighbor (KNN). By combining static and dynamic features, DeepDroid compensates for the insufficient code coverage of static detection and the high false positive rate of dynamic detection, and by using DBN for feature recognition it achieves both high network training speed and high detection accuracy.
Anti-fingerprinting model of operation system based on network deception
CAO Xu, FEI Jinlong, ZHU Yuefei
Journal of Computer Applications    2016, 36 (3): 661-664.   DOI: 10.11772/j.issn.1001-9081.2016.03.661
Since traditional host operating system anti-fingerprinting technologies lack the ability of integrated defense, a Network Deception based operating system Anti-Fingerprinting model (NDAF) was proposed. Firstly, its basic working principle was introduced: a deception server generates the fingerprint deception template, and each host dynamically changes its protocol stack fingerprint according to the template, thereby misleading the attacker's operating system fingerprinting. Secondly, a trust management mechanism was proposed to improve system efficiency, with different deception strategies carried out according to different degrees of threat. Experiments show that NDAF imposes a certain overhead on network efficiency, about 11% to 15%, and comparison experiments show that its anti-fingerprinting ability is better than that of typical operating system anti-fingerprinting tools (OSfuscate and IPmorph). NDAF can effectively increase the security of the target network through integrated defense and deception defense.
Software reliability prediction model based on grey Elman neural network
CAO Weidong, ZHU Yuanzhi, ZHAI Panpan, WANG Jing
Journal of Computer Applications    2016, 36 (12): 3481-3485.   DOI: 10.11772/j.issn.1001-9081.2016.12.3481
Current software reliability prediction models suffer from large fluctuations in prediction accuracy and poor adaptability on field reliability data with strong randomness and dynamics. To solve these problems, a software reliability prediction model based on a grey Elman neural network was proposed. First, the grey GM(1,1) model was used to predict the failure data and weaken their randomness. Then, an Elman neural network was used to model the residuals produced by GM(1,1) and capture their dynamic change rules. Finally, the predictions of GM(1,1) and of the Elman residual network were combined to obtain the final outcomes. A simulation experiment was conducted on a field failure data set from a flight inquiry system. The grey Elman neural network model was compared with the Back-Propagation (BP) neural network model and the Elman neural network model; the corresponding Mean Squared Error (MSE) and Mean Relative Error (MRE) of the three models were 105.1, 270.9, 207.5 and 0.0011, 0.0021, 0.0016 respectively, so the errors of the grey Elman prediction model were the smallest. The experimental results show that the proposed grey Elman neural network prediction model has higher prediction accuracy.
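The GM(1,1) step can be sketched compactly. This is the textbook grey model fit (accumulated series, least squares for the development coefficient a and grey input b, then a one-step forecast), applied to a made-up series rather than the paper's flight inquiry data:

```python
import math

def gm11(x0):
    """Textbook GM(1,1) grey model: fit dx1/dt + a*x1 = b on the
    accumulated series x1, then forecast the next raw value x0."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]               # accumulated series
    z = [-0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y = x0[1:]
    m = n - 1
    # Least squares for x0(k) = a*z(k) + b via the 2x2 normal equations.
    szz, sz = sum(v * v for v in z), sum(z)
    szy, sy = sum(v * w for v, w in zip(z, y)), sum(y)
    det = szz * m - sz * sz
    a = (m * szy - sz * sy) / det
    b = (szz * sy - sz * szy) / det

    def x1_hat(k):  # time response; k = 0 corresponds to x0[0]
        return (x0[0] - b / a) * math.exp(-a * k) + b / a

    return x1_hat(n) - x1_hat(n - 1)                       # next x0 forecast

# Toy cumulative-failure-style series (geometric, so the fit is nearly exact).
forecast = gm11([100, 110, 121, 133.1])
```

The Elman network would then be trained on the residuals between such forecasts and the observed values.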
Text segmentation based on superpixel fusion
ZHANG Kuang, ZHU Yuanping
Journal of Computer Applications    2016, 36 (12): 3418-3422.   DOI: 10.11772/j.issn.1001-9081.2016.12.3418
Improving segmentation performance is an important problem in text recognition, where text images are disturbed by complex backgrounds and noise. To solve this problem, a text segmentation method based on superpixel fusion was proposed. Firstly, the text image was binarized initially and the text stroke width was estimated. Then, superpixel segmentation and superpixel fusion were performed on the image. Finally, the local consistency characteristics of the fused superpixels were used to check the original binary image. Experimental results show that, compared with Maximally Stable Extremal Region (MSER) and Stroke based Superpixel Grouping (SSG), the proposed method improves segmentation precision by 8.00 and 7.00 percentage points on the KAIST database, and improves text recognition rate by 5.33 and 4.88 percentage points on the ICDAR2003 database. The proposed method has a strong denoising ability.
Optimized fault tolerance method based on virtualization technology in simulated system
CHEN Zhijia, ZHU Yuanchang, DI Yanqiang, FENG Shaochong
Journal of Computer Applications    2015, 35 (8): 2392-2396.   DOI: 10.11772/j.issn.1001-9081.2015.08.2392

The breakdown of simulation nodes or the shortage of simulation resources causes failures of distributed simulation systems and lowers their reliability. To improve fault tolerance performance and decrease fault tolerance overhead, a fault tolerance method based on virtualization technology was proposed, in which different fault tolerance methods are adopted according to the failure location. The optimization of the checkpoint strategy was analyzed and the optimal checkpoint interval was obtained. Three main problems of the backup strategy were identified, namely the selection of nodes, the number of copies and the distribution of the copies, and they were solved through virtualization technology. A fault tolerance strategy based on virtual machine migration was proposed as a complement to the checkpoint and backup strategies to decrease overhead. The performance of the dynamic fault tolerance strategy and the normal fault tolerance strategy was evaluated through experiments. The experimental results show that the proposed fault tolerance strategy is efficient and its overhead is kept at a low level.
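For the checkpoint interval, one classical closed form is Young's first-order approximation; the paper derives its own optimum, so this formula and the example numbers are only a stand-in for illustration:

```python
import math

def young_interval(checkpoint_cost, mtbf):
    """Young's approximation of the optimal interval between
    checkpoints: T_opt = sqrt(2 * C * MTBF), where C is the cost of
    taking one checkpoint and MTBF the mean time between failures."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf)

# A 30 s checkpoint cost against a 24 h mean time between failures.
t_opt = young_interval(30.0, 24 * 3600)
```

Intuitively, a larger checkpoint cost or a more reliable system both push the optimal interval up, which is exactly the trade-off a checkpoint strategy has to balance.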

microRNA identification method based on feature clustering and random subspace
RUI Zhiliang, ZHU Yuquan, GENG Xia, CHEN Geng
Journal of Computer Applications    2015, 35 (2): 374-377.   DOI: 10.11772/j.issn.1001-9081.2015.02.0374

The sensitivity and specificity of current microRNA identification methods are not ideal or are imbalanced, because these methods emphasize new features while ignoring the weak classification ability and redundancy of features. To address this, an ensemble algorithm based on feature clustering and the random subspace method, named CLUSTER-RS, was proposed. After eliminating features with weak classification ability according to the information gain ratio, the algorithm used information entropy to measure feature relevance and grouped the features into clusters. It then randomly selected the same number of features from each cluster to compose a feature set, which was used to train the base classifiers constituting the final identification model. After tuning the parameters and selecting base classifiers to optimize the algorithm, CLUSTER-RS was compared with five classic microRNA identification methods (Triplet-SVM, miPred, MiPred, microPred and HuntMi) on the latest microRNA dataset. CLUSTER-RS was inferior only to microPred in sensitivity, performed best in specificity, and also had advantages in accuracy and Matthews correlation coefficient. The experiments show that CLUSTER-RS achieves good performance and is superior to its rivals in balancing sensitivity and specificity.

Time series outlier detection based on sliding window prediction
YU Yufeng, ZHU Yuelong, WAN Dingsheng, GUAN Xingzhong
Journal of Computer Applications    2014, 34 (8): 2217-2220.   DOI: 10.11772/j.issn.1001-9081.2014.08.2217

To solve data quality problems in hydrological time series analysis and decision-making, a prediction-based outlier detection algorithm was proposed. The method first splits a given hydrological time series into subsequences to build a forecasting model that predicts future values; an outlier is then reported when the difference between the predicted and observed values exceeds a certain threshold. The setup of the sliding window and the parameters of the detection algorithm were analyzed, and the results were validated on real data. The experimental results show that the proposed algorithm can effectively detect outliers in time series, improving sensitivity and specificity to at least 80% and 98% respectively.
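A minimal version of the idea, predicting each point from a sliding window and flagging large deviations, can be sketched as follows. The moving-average predictor and the toy water-level readings are illustrative assumptions, not the paper's forecasting model:

```python
def detect_outliers(series, window=3, threshold=2.0):
    """Predict each point as the mean of the preceding sliding window
    and flag indices whose deviation exceeds the threshold."""
    outliers = []
    for i in range(window, len(series)):
        predicted = sum(series[i - window:i]) / window
        if abs(series[i] - predicted) > threshold:
            outliers.append(i)
    return outliers

# Hypothetical hydrological readings with a single spike at index 5.
levels = [10.0, 10.2, 10.1, 10.3, 10.2, 15.0, 10.4, 10.3]
```

In practice the threshold would be tuned per station, since it directly trades sensitivity against specificity.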

Optimization method of taint propagation analysis based on semantic rules
LIN Wei, ZHU Yuefei, SHI Xiaolong, CAI Ruijie
Journal of Computer Applications    2014, 34 (12): 3511-3514.  

The time overhead of taint propagation analysis in off-line taint analysis is very large, so research on efficient taint propagation is of great significance. To solve this problem, an optimization method of taint propagation analysis based on semantic rules was proposed. The method defines semantic description rules for instructions to describe taint propagation semantics, automatically generates the semantics of assembly instructions by using an intermediate language, and then analyzes taint propagation according to the semantic rules, avoiding the repeated semantic parsing caused by repeatedly executed instructions in existing taint analysis methods and thus improving the efficiency of taint analysis. The experimental results show that this method effectively reduces the time cost of taint propagation analysis, taking only 14% of the time of taint analysis based on the intermediate language.

Identification of encrypted function in malicious software
CAI Jianzhang, WEI Qiang, ZHU Yuefei
Journal of Computer Applications    2013, 33 (11): 3239-3243.  
Since malware usually evades security detection and flow analysis through encryption functions, a scheme for identifying the encrypted functions in malware was proposed. The scheme generates a dynamic loop data flow graph by identifying loops and their input/output in the dynamic trace. The input/output sets are then abstracted from the loop data flow graph, references for known encryption functions are designed, and each reference is computed with elements of the input sets as parameters. If the result matches an element of the output sets, the scheme concludes that the malware encrypts information with that known encryption function. The experimental results show that the proposed scheme can identify the encryption functions applied to payloads in obfuscated malware.
HDFS optimization program based on GE coding
ZHU Yuanyuan, WANG Xiaojing
Journal of Computer Applications    2013, 33 (03): 730-733.   DOI: 10.3724/SP.J.1087.2013.00730
Concerning the data disaster recovery efficiency and the small-file problem of the Hadoop Distributed File System (HDFS), an improved coding-based solution was presented, which introduces a GE erasure coding module into HDFS. Unlike the multiple-replication strategy adopted by the original system, the module encodes HDFS files into a large number of slices and saves them dispersedly across the clusters of the storage system in a distributed fashion. The solution introduces the new concept of the slice: slices are classified and merged for storage in blocks, and a secondary index over slices is established to solve the small-file issue. In the case of cluster failure, the original data can be recovered by collecting any 70% of the slices and decoding. The solution also introduces dynamic replication strategies that dynamically create and delete replicas to keep the whole cluster well load-balanced and to settle hotspot issues. Experiments on simulated storage clusters show the feasibility and advantages of the proposed solution.
Network security risk evaluation model based on grey linguistic variables in mobile bank
SHEN Li-xiang, CAO Guo, ZHU Yu-guang
Journal of Computer Applications    2012, 32 (11): 3136-3139.   DOI: 10.3724/SP.J.1087.2012.03136
A multi-person decision method based on the grey additive linguistic variable weighted aggregation operator is presented to solve network security risk evaluation problems in mobile banking, in which the attribute values take the form of Grey Additive Linguistic Variables (GALV). Firstly, the concept of grey additive linguistic variables and their relational calculation rules are defined. Then, aggregation operators are defined to solve the network security risk evaluation problem, including the grey additive linguistic weighted aggregation operator and the grey additive linguistic ordered weighted aggregation operator. Finally, an example involving mobile banking shows the effectiveness of the method.
Energybalanced adaptive clustering algorithm for wireless sensor network
LV Tao, ZHU Qing-xin, ZHU Yu-yu
Journal of Computer Applications    2012, 32 (11): 3107-3111.   DOI: 10.3724/SP.J.1087.2012.03107
This paper presents an Energy-Balanced Adaptive Clustering Algorithm (EBACA) for Wireless Sensor Network (WSN) based on LEACH and HEED, in which each node independently decides, according to its status, whether to compete to act as a cluster head. The cluster head selection criteria take both random probability and node residual energy into account, and introduce a combination of node energy prediction and an energy threshold. To balance the energy consumption of nodes, EBACA adjusts time slices to modify the working frequency of each node. Furthermore, EBACA uses multi-hop inter-cluster data transmission to save total energy: a specialized cluster head node collects the data from the other cluster head nodes and transmits the aggregated data to the base station. The objective is to balance energy consumption and maximize network lifetime. The analysis and simulation results show that EBACA provides more uniform energy consumption among nodes and can prolong network lifetime compared to LEACH and HEED.
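The residual-energy-aware head election can be illustrated with a LEACH-style threshold scaled by remaining energy. This is one common variant and an assumption here, not EBACA's exact criterion:

```python
import random

def head_threshold(p, r, residual, initial):
    """LEACH threshold T(n) = p / (1 - p * (r mod 1/p)) for round r and
    desired head fraction p, scaled by the node's residual-energy ratio
    so that nodes with more remaining energy are elected more often."""
    base = p / (1 - p * (r % (1 / p)))
    return base * (residual / initial)

def elect(p, r, residual, initial, rng=random.random):
    """A node elects itself cluster head when a uniform draw falls
    below its threshold."""
    return rng() < head_threshold(p, r, residual, initial)
```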
Reference | Related Articles | Metrics
Parameter adjustment in cognitive radio spectrum allocation based on game theory
ZHANG Bei-wei HU Kun-yuan ZHU Yun-long
Journal of Computer Applications    2012, 32 (09): 2408-2411.   DOI: 10.3724/SP.J.1087.2012.02408
Abstract1001)      PDF (629KB)(578)       Save
With regard to dynamic spectrum allocation in wireless cognitive networks, a dynamic Bertrand game algorithm for the channel pricing of licensed users was proposed using the Bertrand equilibrium. Then, the relationship between the stability of the Nash equilibrium and the adjustment speed parameter was analyzed. Consequently, a step response function was utilized to characterize the oscillatory process of the game, and a three-value method was proposed for obtaining the step response parameters. The simulation results show that the proposed algorithm can obtain a stable channel price when the value of the speed parameter is less than 0.04. Besides, the feasibility of using a step function to analyze the oscillatory game process is proved, and this method makes it convenient for licensed users to set prices in real time and gain more economic benefits.
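The role of the speed parameter can be illustrated with a toy price-adjustment iteration. This is a hypothetical stand-in, assuming a single licensed user with a linear demand profit model, not the paper's game: each step moves the price along the marginal-profit gradient scaled by a speed parameter alpha, and a too-large alpha makes the iteration oscillate and diverge.

```python
# Hypothetical discrete-time price adjustment for one licensed user, in the
# spirit of a dynamic Bertrand game. The linear-demand profit model
# (p - cost) * (a - b * p) is an assumed stand-in, not the paper's model.

def adjust_price(p0, alpha, a=10.0, b=1.0, cost=1.0, steps=200):
    """Iterate p <- p + alpha * d(profit)/dp."""
    p = p0
    for _ in range(steps):
        marginal = a + b * cost - 2 * b * p   # derivative of (p-cost)(a-bp)
        p = p + alpha * marginal
    return p

# Small alpha converges to the equilibrium price (a + b*cost) / (2*b) = 5.5;
# a large alpha overshoots each step and the price oscillates and diverges.
print(adjust_price(2.0, alpha=0.04))
print(adjust_price(2.0, alpha=1.1))
```

In this toy model the deviation from equilibrium is multiplied by (1 - 2·b·alpha) each step, so stability requires alpha below 1/b, mirroring the paper's finding of a stability bound on the speed parameter.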
Reference | Related Articles | Metrics
Particle swarm optimization algorithm with composite strategy inertia weight
GAO Zhen-hua MEI Li ZHU Yuan-jian
Journal of Computer Applications    2012, 32 (08): 2216-2218.   DOI: 10.3724/SP.J.1087.2012.02216
Abstract857)      PDF (484KB)(406)       Save
A new Particle Swarm Optimization (PSO) algorithm with linearly decreasing and dynamically changing inertia weight, named L-DPSO, was presented to solve the problem that a linearly decreasing inertia weight cannot match the nonlinear search characteristics of PSO. The algorithm combined the linear strategy of a linearly decreasing inertia weight with the nonlinear strategy of a dynamically changing inertia weight, and assigned weights to the two strategies separately. Using the Griewank and Rastrigin test functions to compare L-DPSO with the linearly decreasing inertia weight (LPSO) and the dynamically changing inertia weight (DPSO), the experimental results show that the convergence speed of L-DPSO is obviously superior to that of LPSO and DPSO, and the convergence accuracy is also improved. Finally, the Griewank and Rastrigin test functions were used to compare L-DPSO with several commonly used inertia weights, and the results show that L-DPSO also has an obvious advantage.
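The composite strategy can be sketched as a weighted blend of the two inertia-weight schedules. The mixing coefficients and the exponential form of the dynamic term below are illustrative assumptions, not the paper's exact formula.

```python
import math

# Sketch of a composite inertia weight: a weighted blend of a linearly
# decreasing term (LPSO-style) and a dynamically, nonlinearly changing term
# (DPSO-style). The mixing weights c1/c2 and the exponential decay are
# illustrative assumptions, not the paper's exact formula.

def composite_inertia(t, t_max, w_max=0.9, w_min=0.4, c1=0.5, c2=0.5):
    linear = w_max - (w_max - w_min) * t / t_max                     # LPSO
    dynamic = w_min + (w_max - w_min) * math.exp(-4.0 * t / t_max)   # DPSO
    return c1 * linear + c2 * dynamic

# The weight decays from w_max toward w_min over the run, shifting the
# swarm from global exploration to local exploitation.
for t in (0, 50, 100):
    print(round(composite_inertia(t, 100), 3))
```

Early iterations keep the weight high for exploration, while the nonlinear term lets it drop faster mid-run than a purely linear schedule would.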
Reference | Related Articles | Metrics
XML keyword search algorithm based on interrelated smallest lowest entity sub-trees
YAO Quan-zhu YU Xun-bin
Journal of Computer Applications    2012, 32 (04): 1090-1093.   DOI: 10.3724/SP.J.1087.2012.01090
Abstract847)      PDF (788KB)(387)       Save
A query algorithm based on semantic relativity was proposed to address the many meaningless nodes contained in current XML keyword retrieval results. Based on the semi-structured and self-descriptive characteristics of XML documents, the concept of the Smallest Lowest Entity Sub-Tree (SLEST), in which only a physical connection exists between keywords, was put forward by making full use of the semantic correlation between nodes. Based on the Smallest Interrelated Entity Sub-Tree (SIEST), an algorithm in which the result was represented by SLEST and SIEST instead of the Smallest Lowest Common Ancestor (SLCA) was proposed to capture the IDREF relation between keywords. The results show that the proposed algorithm can increase the precision of XML keyword retrieval.
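For background, the SLCA baseline the paper improves on can be sketched with a plain lowest-common-ancestor computation over parent links. This is only the baseline: the entity and IDREF semantics of SLEST/SIEST are beyond this sketch, and the tree below is a made-up example.

```python
# Sketch of the SLCA baseline: for a tree given as child -> parent links,
# the LCA of two keyword-match nodes is found by walking ancestor paths.
# SLEST/SIEST entity and IDREF semantics are not modeled here.

def ancestors(node, parent):
    """Path from node up to the root, inclusive."""
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def lca(u, v, parent):
    """Lowest common ancestor of u and v."""
    anc_u = set(ancestors(u, parent))
    for node in ancestors(v, parent):
        if node in anc_u:
            return node
    return None

# bib -> book1 -> (title1, author1); bib -> book2 -> title2
parent = {"book1": "bib", "title1": "book1", "author1": "book1",
          "book2": "bib", "title2": "book2"}
print(lca("title1", "author1", parent))  # "book1": a meaningful entity
print(lca("title1", "title2", parent))   # "bib": a likely meaningless result
```

The second query illustrates the problem the abstract raises: SLCA happily returns "bib", a node that relates the keywords only physically, which is exactly the kind of meaningless result SLEST/SIEST filtering is designed to avoid.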
Reference | Related Articles | Metrics
Improved algorithm for point cloud data simplification
ZHU Yu KANG Bao-sheng LI Hong-an SHI Fang-ling
Journal of Computer Applications    2012, 32 (02): 521-544.   DOI: 10.3724/SP.J.1087.2012.00521
Abstract1284)      PDF (670KB)(613)       Save
Since geometrical features are often excessively lost in Kim's simplification process for scattered point clouds, an improved simplification method was proposed. First, the principal curvatures of the points in the point cloud were estimated by least-squares parabolic fitting. Then an error metric based on the Hausdorff distance of the principal curvature was used to retain and extract the feature points. Finally, through testing and analyzing measured data with different features, the results show that the proposed method simplifies the point cloud data to a large extent, that the simplification results are more uniform, and that the original point cloud geometry is fully retained without destroying small features, so both quality and efficiency are guaranteed. The method can provide effective data information for three-dimensional reconstruction, saving processing time and hardware resources.
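The curvature-based criterion can be illustrated in a reduced 2D setting. This is only an analogue under stated assumptions: a parabola is least-squares-fitted to each point's neighbors, the leading coefficient serves as a curvature estimate, and a simple threshold stands in for the paper's Hausdorff-distance error metric on principal curvature.

```python
import numpy as np

# Illustrative 2D analogue of the curvature-based criterion: fit a parabola
# to each point's neighborhood by least squares, use |2a| (a = leading
# coefficient) as a curvature estimate, and keep a point when its curvature
# exceeds a threshold. The threshold test is a stand-in for the paper's
# Hausdorff-distance error metric on principal curvature.

def simplify(points, k=3, thresh=0.05):
    kept = []
    for i in range(len(points)):
        lo, hi = max(0, i - k), min(len(points), i + k + 1)
        x, y = points[lo:hi, 0], points[lo:hi, 1]
        a = np.polyfit(x, y, 2)[0]        # leading coefficient of the fit
        if abs(2 * a) > thresh:           # high curvature -> feature point
            kept.append(points[i])
    return np.array(kept)

x = np.linspace(-1, 1, 41)
pts = np.column_stack([x, np.abs(x)])     # a sharp crease at x = 0
print(len(simplify(pts)))                 # only points near the crease survive
```

Flat regions fit a near-degenerate parabola (a close to zero) and are discarded, while points whose neighborhoods straddle the crease survive, mimicking how the full method preserves small geometric features.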
Related Articles | Metrics
ZHANG Bei-wei ZHU Yun-long HU Kun-yuan
Journal of Computer Applications    2011, 31 (12): 3184-3186.  
Abstract1415)      PDF (596KB)(905)       Save
A multi-objective optimization model was constructed for the overall performance optimization problem of a cognitive radio system in the process of idle band allocation. The model jointly maximized the system's total bandwidth benefit and the secondary users' access fairness. An intelligent optimization algorithm based on Particle Swarm Optimization (PSO), together with its concrete implementation, was given. Simulations were conducted to compare the proposed method with the Color-Sensitive Graph Coloring (CSGC) algorithm under the Collaborative-Max-Sum-Reward (CSUM) and Collaborative-Max-Proportional-Fair (CMPF) rules, taking the system's total bandwidth benefit, the secondary users' access fairness and the system's overall performance as evaluation criteria. As a result, the proposed method achieves a good tradeoff between total system bandwidth benefit and user access fairness, and has better overall system performance.
Reference | Related Articles | Metrics
Motivation-based association rule mining
Xu-Hui LIU Shi-Huang SHAO Guang-Zhu YU
Journal of Computer Applications   
Abstract1624)      PDF (518KB)(658)       Save
Existing algorithms for support-based Association Rule Mining (ARM) cannot find itemsets that are not frequent but have high utility values, while Utility-Based Association Rule Mining (UBARM) cannot find itemsets whose utility values are not high but for which the product of the support and the utility of the itemset (defined as motivation) is very large. This paper proposed motivation-based association rules and a bottom-up algorithm called HM-miner to find all high-motivation itemsets efficiently. By integrating the advantages of support and utility, the new measure, motivation, can capture both the statistical and the semantic significance of an itemset. HM-miner adopted a new pruning strategy, based on the motivation upper bound property, to cut down the search space.
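The motivation measure itself is simple to compute once support and utility are defined. The sketch below assumes a per-item profit model for utility and a list-of-lists transaction format; it illustrates the measure only, not the HM-miner algorithm or its upper-bound pruning.

```python
# Minimal sketch of the "motivation" measure: motivation(X) = support(X) *
# utility(X). The transaction format and the per-item-profit utility model
# are illustrative assumptions, not HM-miner itself.

def motivation(itemset, transactions, profit):
    matches = [t for t in transactions if itemset <= set(t)]
    support = len(matches) / len(transactions)
    # total utility: profit of each item in the itemset, summed over matches
    utility = sum(profit[i] for t in matches for i in itemset)
    return support * utility

transactions = [["bread", "milk"], ["bread", "caviar"], ["bread", "milk"]]
profit = {"bread": 1.0, "milk": 2.0, "caviar": 50.0}
print(motivation({"bread", "milk"}, transactions, profit))    # frequent, cheap
print(motivation({"bread", "caviar"}, transactions, profit))  # rare, valuable
```

Here the rare but high-profit pair scores higher than the frequent cheap one, which neither pure support nor pure utility ranking would expose in the same way.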
Related Articles | Metrics
Face recognition based on feature weighted and performance comparison
ZHU Yu-lian
Journal of Computer Applications    2005, 25 (11): 2584-2585.  
Abstract1781)      PDF (510KB)(1081)       Save
Current face recognition methods usually do not consider the effects of different facial features. After the roles of different facial features in the course of face recognition were researched, face images were preprocessed by three feature weighting methods respectively, and then recognized by the popular face recognition methods of associative memory, principal component analysis and Fisher linear discriminant analysis. The experiments on the ORL face database show that feature weighting methods are effective and general for face recognition.
Related Articles | Metrics
An efficient algorithm for mining association rules based on concept lattices
XU Quan-qing,ZHU Yu-wen, LIU Wan-chun
Journal of Computer Applications    2005, 25 (08): 1856-1857.   DOI: 10.3724/SP.J.1087.2005.01856
Abstract1111)      PDF (167KB)(1250)       Save
Among association rule algorithms, the Apriori algorithm can be improved in its processing of item and transaction sets; for example, its efficiency in generating the frequent 2-itemset L2 is very low. The Apriori algorithm was studied and its merits and defects were analysed. A new association rule algorithm, the Apriori algorithm based on Concept Lattices (ACL), was put forward. It adopted concept lattices and equivalence relations, made use of rough set theory, and then calculated the frequent 2-itemset L2. Experimental results show that ACL is an efficient algorithm that outperforms previous methods.
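To make the L2 bottleneck concrete, the sketch below counts all candidate 2-itemsets in a single pass over the transactions. This is plain counting for contrast with Apriori's generate-and-test step, not the concept-lattice ACL algorithm itself; the data is a made-up example.

```python
from itertools import combinations
from collections import Counter

# The L2 step the abstract calls out: a naive single-pass count of all
# candidate 2-itemsets, shown for contrast with Apriori's candidate
# generation. This is not the concept-lattice ACL algorithm.

def frequent_2_itemsets(transactions, min_support):
    counts = Counter()
    for t in transactions:
        # every unordered pair of distinct items in the transaction
        for pair in combinations(sorted(set(t)), 2):
            counts[pair] += 1
    return {pair: c for pair, c in counts.items() if c >= min_support}

transactions = [["a", "b", "c"], ["a", "b"], ["b", "c"], ["a", "c"]]
print(frequent_2_itemsets(transactions, min_support=2))
```

The cost is quadratic in transaction width, which is why L2 generation dominates Apriori's runtime on dense data and why ACL's equivalence-class approach targets exactly this step.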
Related Articles | Metrics
Clustering algorithm based on rough set and Cobweb
XU Quan-qing, ZHU Yu-wen, LI liang, LIU Wan-chun
Journal of Computer Applications    2005, 25 (06): 1350-1352.   DOI: 10.3724/SP.J.1087.2005.1350
Abstract1287)      PDF (138KB)(1015)       Save
An efficient algorithm named CRSC (a Clustering algorithm based on Rough Set and Cobweb) was proposed. Aiming at the shortcomings of Cobweb and drawing on related theories, rough set theory was introduced to solve for an optimal reduced set of attribute-value pairs, which was then combined with the Cobweb algorithm to construct a hierarchical tree. The experimental study shows that CRSC greatly improves efficiency without losing accuracy compared with previous methods.
Related Articles | Metrics